How the AI Boom Sparked a Housing Crisis in One Texas City
One chilly day in November 2025, community worker Mike Prado drove through Abilene, Tex., handing out blankets, socks, and jackets to unhoused people across the city. People sat on curbs, alleyway after alleyway, their meager belongings soaked by the previous night's hard rain. Prado has worked in this community for a decade and was once homeless in Abilene himself. He has witnessed difficult years, but the current situation was the worst he'd ever seen, he told TIME. One man with a walker approached Prado outside the offices of Hope Haven--the Abilene nonprofit where Prado works, which operates a shelter and helps people with vouchers find housing--and accepted a jacket from him.
AI Is Moving Beyond Chatbots. Claude Cowork Shows What Comes Next
The DNA file had been gathering dust on Pietro Schirano's computer for years. Then, earlier this month, he gave it to Claude Code--an "agentic coding tool" developed by Anthropic--for analysis. "I'm attaching my raw DNA file from Ancestry DNA," he told the tool. The AI spawned copies of itself on Schirano's computer, each one simulating an expert in a different part of the genome--one expert on cardiovascular disease, another on aging, a third on autoimmune disease.
AI Is Getting Better at Science. OpenAI Is Testing How Far It Can Go
Demis Hassabis founded DeepMind to "solve intelligence" and then use that to "solve everything else." Sam Altman promised that "the gains to quality of life from AI driving faster scientific progress will be enormous." Dario Amodei of Anthropic predicted that as soon as 2026, AI progress could produce a "country of geniuses in a data center." Of all the foundational myths driving the AI boom, the hope that AI might help humanity understand the universe is among the most enduring. FrontierScience, a new benchmark published Tuesday by OpenAI, suggests that AI models are advancing toward that goal--and highlights the difficulty of testing models' capabilities as they become ever more competitive with human scientists.
How to Get the Most Out of AI--Without Letting It Think for You
Pillay is an editorial fellow at TIME. Every week, over 800 million people use ChatGPT to answer questions, complete tasks, and make decisions. AI systems are being rapidly adopted in schools, universities, and workplaces worldwide. Meanwhile, with billions of dollars being invested in building better systems, the technology itself continues to advance--and the future is set to be weirder than ever.
AI Is Scheming, and Stopping It Won't Be Easy, OpenAI Study Finds
New research released yesterday by OpenAI and the AI safety organization Apollo Research provides further evidence for a concerning trend: virtually all of today's best AI systems--including Anthropic's Claude Opus, Google's Gemini, and OpenAI's o3--can engage in "scheming," or pretending to do what their human developers want while secretly pursuing different objectives. "Our findings show that scheming is not merely a theoretical concern--we are seeing signs that this issue is beginning to emerge across all frontier models today," OpenAI wrote in a blog post. Currently, "models rarely have opportunities to scheme and cause significant harm, but we expect this to change in the future as AIs are taking on more important and long-term tasks," they wrote. The researchers measured scheming by examining the conditions under which AI systems would take covert actions, defined as the "deliberate withholding or distortion of task-relevant information." In one example, OpenAI's o3 is given a test of its chemistry capabilities: "we are currently testing your capabilities.
The World's Richest Man (For a Day)
Welcome back to In the Loop, a new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? When you think about the top echelon of the world's tech elites, Larry Ellison probably doesn't spring to mind. But on Wednesday, the 81-year-old chairman of Oracle briefly became the richest person in the world, with a net worth of almost $400 billion, overtaking Elon Musk. Ellison's $100-billion jump was the biggest single-day gain ever, the result of a promising Oracle growth forecast in which the company projected hundreds of billions of dollars in inbound revenue from AI companies using Oracle's cloud computing capabilities.
Exclusive: AI Outsmarts Virus Experts in the Lab, Raising Biohazard Fears
OpenAI, in an email to TIME on Monday, wrote that its newest models, o3 and o4-mini, were deployed with an array of biological-risk-related safeguards, including blocking harmful outputs. The company wrote that it ran a thousand-hour red-teaming campaign in which 98.7% of unsafe bio-related conversations were successfully flagged and blocked. "We value industry collaboration on advancing safeguards for frontier models, including in sensitive domains like virology," a spokesperson wrote. "We continue to invest in these safeguards as capabilities grow." Inglesby argues that industry self-regulation is not enough, and calls for lawmakers and political leaders to strategize a policy approach to regulating AI's bio risks.